116 research outputs found

    CDI-Type II: Collaborative Research: Cyber Enhancement of Spatial Cognition for the Visually Impaired

    Wayfinding is an essential capability for any person who wishes to lead an independent lifestyle. It requires the successful execution of several tasks, including navigation and object and place recognition, all of which depend on accurate assessment of the surrounding environment. For a visually impaired person these tasks may be exceedingly difficult to accomplish, and there are risks associated with failure in any of them. Guide dogs and white canes are widely used for navigation and environment sensing, respectively; the former, however, has costly and often prohibitive training requirements, while the latter can only provide cues about obstacles in one's surroundings. Human performance on tasks that depend on visual information can be improved by sensing that supplies environmental cues, such as position, orientation, local geometry, and object descriptions, via appropriate sensors and sensor-fusion algorithms. Most work on wayfinding aids has focused on outdoor environments and has led to speech-enabled, GPS-based navigation systems that describe streets, addresses, and points of interest. In contrast, the limited technology available for indoor navigation requires significant modification of the building infrastructure, whose high cost has prevented wide use. This proposal adopts a multi-faceted approach to solving the indoor navigation problem for people with limited vision. It combines expertise in robotics, computer vision, and blind spatial cognition with behavioral studies of interface design to guide the discovery of information requirements and optimal delivery methods for an indoor navigation system. Designing perception and navigation algorithms, implemented on miniature, commercially available hardware, while explicitly considering the spatial-cognition capabilities of the visually impaired, will lead to indoor navigation systems that assist blind people in their wayfinding tasks while facilitating cognitive-map development.
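
    The abstract's mention of sensor-fusion algorithms invites a concrete illustration. Below is a minimal sketch, assuming a hypothetical wearable with a gyroscope and a compass, of a complementary filter that fuses a smooth but drift-prone integrated heading with a noisy but drift-free absolute one; the names and the ALPHA constant are assumptions, not the project's actual algorithm.

```python
import math

ALPHA = 0.98  # hypothetical weight placed on the gyro-integrated estimate

def fuse_heading(prev_heading, gyro_rate, dt, compass_heading):
    """Blend a dead-reckoned heading with an absolute compass reading."""
    predicted = prev_heading + gyro_rate * dt  # integrate angular rate
    # Wrap the compass disagreement into [-pi, pi] before correcting.
    error = math.atan2(math.sin(compass_heading - predicted),
                       math.cos(compass_heading - predicted))
    return predicted + (1.0 - ALPHA) * error

heading = 0.0
for gyro_rate, compass in [(0.10, 0.012), (0.11, 0.025), (0.09, 0.033)]:
    heading = fuse_heading(heading, gyro_rate, dt=0.1, compass_heading=compass)
print(f"fused heading: {heading:.4f} rad")
```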

    III: Small: Information Integration and Human Interaction for Indoor and Outdoor Spaces

    The goal of this research project is to provide a framework model that integrates existing models of indoor and outdoor space, and to use this model to develop an interactive platform for navigation in mixed indoor and outdoor spaces. The user should experience the transition between inside and outside as seamless in terms of the navigational support provided. The approach consists of integrating indoors and outdoors on several levels: conceptual models (ontologies), formal system designs, data models, and human interaction. At the conceptual level, the project draws on existing ontologies as well as examining the affordances that the space provides; for example, an outside pedestrian walkway affords the same function as an inside corridor. Formal models of place and connection are also used to precisely specify the design of the navigational support system. Behavioral experiments with human participants assess the validity of our framework for supporting human spatial learning and navigation in integrated indoor and outdoor environments, and they enable the identification and extraction of the salient features of indoor and outdoor spaces for incorporation into the framework. Findings from these studies will help validate the efficacy of the formal framework. Results will be distributed via the project Web site (www.spatial.maine.edu/IOspace) and will be incorporated into graduate-level courses on human interaction with mobile devices, shared with public school teachers participating in the University of Maine's NSF-funded RET (Research Experiences for Teachers) program. The research teams are working with two companies and one research center on technology transfer for building indoor-outdoor navigation tools with a wide range of applications, including those for persons with disabilities.
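
    As a concrete (and entirely hypothetical) reading of the formal models of place and connection mentioned above, the sketch below shows one way a unified place-and-connection graph could let an indoor corridor and an outdoor walkway share the same affordance, so a single route planner treats the indoor/outdoor seam as just another edge.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Place:
    name: str
    setting: str     # "indoor" or "outdoor"
    affordance: str  # e.g. "pedestrian_path", "entrance", "room"

@dataclass
class SpaceGraph:
    edges: dict = field(default_factory=dict)  # Place -> set of adjacent Places

    def connect(self, a: Place, b: Place):
        self.edges.setdefault(a, set()).add(b)
        self.edges.setdefault(b, set()).add(a)

corridor = Place("2nd-floor corridor", "indoor", "pedestrian_path")
entrance = Place("main entrance", "indoor", "entrance")
walkway = Place("campus walkway", "outdoor", "pedestrian_path")

g = SpaceGraph()
g.connect(corridor, entrance)
g.connect(entrance, walkway)  # the indoor/outdoor transition is just another edge
```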

    Evaluation of Non-Visual Panning operations using Touch-Screen Devices

    This paper summarizes the implementation, evaluation, and usability of non-visual panning operations for accessing graphics rendered on touchscreen devices. Four novel non-visual panning techniques were implemented and experimentally evaluated on our prototype, the Vibro-Audio Interface (VAI), which provides completely non-visual access to graphical information using vibration, audio, and kinesthetic cues on a commercial touchscreen device. This demonstration will provide an overview of the system's functionality and will discuss the necessity of non-visual panning operations that enable visually impaired people to access large-format graphics (such as maps and floor plans).
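
    The abstract does not spell out the four techniques, so the following is a hypothetical sketch of the general idea only: the touchscreen shows a window onto a larger map, a panning gesture shifts the window within the map bounds, and a spoken cue reports the new offset so a user working eyes-free can keep track of the viewport. All names and dimensions are invented for illustration.

```python
from dataclasses import dataclass

def announce(message: str):
    # Stand-in for the device's text-to-speech output.
    print(f"[audio] {message}")

@dataclass
class Viewport:
    x: int = 0
    y: int = 0
    width: int = 800
    height: int = 1280
    map_w: int = 4000
    map_h: int = 6000

    def pan(self, dx: int, dy: int):
        """Shift the visible window, clamped to the map bounds."""
        self.x = max(0, min(self.map_w - self.width, self.x + dx))
        self.y = max(0, min(self.map_h - self.height, self.y + dy))
        announce(f"viewport moved to {self.x}, {self.y}")

vp = Viewport()
vp.pan(600, 900)  # e.g. the offset produced by a two-finger drag
```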

    Vertical Color Maps: A Data Independent Alternative to Floor Plan Maps

    Location sharing in indoor environments is limited by the sparse availability of indoor positioning and the lack of geographical building data. Recently, several solutions have begun to implement digital maps for use in indoor space, and the map design is often a variant of floor-plan maps. Whereas massive databases and GIS exist for outdoor use, the majority of indoor environments are not yet available in a consistent digital format. This dearth of indoor maps is problematic, as navigating multistorey buildings is known to create greater difficulty in maintaining spatial orientation and developing accurate cognitive maps. The development of standardized, more intuitive indoor maps can address this vexing problem. The authors therefore present an alternative to current indoor map design that explores the possibility of using colour to represent the vertical dimension on the map. Importantly, this solution is independent of existing geographical building data. The new design is hypothesized to facilitate the integration of indoor spaces better than existing solutions. Findings from a human experiment with 251 participants demonstrate that the vertical colour map is a valid alternative to the regular floor-plan map.
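
    A minimal sketch of the core design idea, under assumed (not published) palette choices: assign each storey a hue along a fixed blue-to-red gradient, so features drawn on a single combined map still signal which floor they belong to.

```python
import colorsys

def floor_colour(floor: int, n_floors: int) -> str:
    """Map a floor index (0 = ground) to an RGB hex colour."""
    # Sweep hue from blue (low floors) to red (high floors); the gradient
    # endpoints are arbitrary assumptions, not the paper's palette.
    hue = (2.0 / 3.0) * (1.0 - floor / max(1, n_floors - 1))
    r, g, b = colorsys.hsv_to_rgb(hue, 0.85, 0.95)
    return "#{:02x}{:02x}{:02x}".format(int(r * 255), int(g * 255), int(b * 255))

for f in range(4):
    print(f"floor {f}: {floor_colour(f, 4)}")
```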

    S4E9: How can we get the most out of technology?

    Refrigerators tell us when we're out of juice. Digital assistants schedule appointments and alert us to the weather forecast. Driverless cars slide into tight parallel parking spaces. Today, many of us increasingly rely on devices, apps, and artificial intelligence in our daily lives. How can technology be designed to do the most good? How can scientists make it easy to use and put people, rather than the technology, in charge? This is the work of the University of Maine VEMI Lab. VEMI stands for Virtual Environment and Multimodal Interaction. This week, directors Rick Corey, Nick Giudice, and Caitlin Howell talk with host Ron Lisnet about the lab's mission, its many projects, and the answer to the question: How can we get the most out of technology?

    Functional Equivalence of Spatial Images from Touch and Vision: Evidence from Spatial Updating in Blind and Sighted Individuals

    This research examined whether visual and haptic map learning yield functionally equivalent spatial images in working memory, as evidenced by similar encoding bias and updating performance. In 3 experiments, participants learned 4-point routes either by seeing or feeling the maps. At test, blindfolded participants made spatial judgments about the maps from imagined perspectives that were either aligned or misaligned with the maps as represented in working memory. Results from Experiments 1 and 2 revealed a highly similar pattern of latencies and errors between visual and haptic conditions. These findings extend the well-known alignment biases for visual map learning to haptic map learning, provide further evidence of haptic updating, and, most importantly, show that learning from the 2 modalities yields very similar performance across all conditions. Experiment 3 found the same encoding biases and updating performance with blind individuals, demonstrating that functional equivalence cannot be due to visual recoding and is consistent with an amodal hypothesis of spatial images.

    Comparing Map Learning between Touchscreen-Based Visual and Haptic Displays: A Behavioral Evaluation with Blind and Sighted Users

    The ubiquity of multimodal smart devices affords new opportunities for eyes-free applications for conveying graphical information to both sighted and visually impaired users. Using previously established haptic design guidelines for generic rendering of graphical content on touchscreen interfaces, the current study evaluates the learning and mental representation of digital maps, representing a key real-world translational eyes-free application. Two experiments involving 12 blind participants and 16 sighted participants compared cognitive map development and test performance on a range of spatio-behavioral tasks across three information-matched learning-mode conditions: (1) our prototype vibro-audio map (VAM), (2) traditional hardcopy-tactile maps, and (3) visual maps. Results demonstrated that when perceptual parameters of the stimuli were matched between modalities during haptic and visual map learning, test performance was highly similar (functionally equivalent) between the learning modes and participant groups. These results suggest equivalent cognitive map formation between both blind and sighted users and between maps learned from different sensory inputs, providing compelling evidence supporting the development of amodal spatial representations in the brain. The practical implications of these results include empirical evidence supporting a growing interest in the efficacy of multisensory interfaces as a primary interaction style for people both with and without vision. Findings challenge the long-held assumption that blind people exhibit deficits on global spatial tasks compared to their sighted peers, with results also providing empirical support for the methodological use of sighted participants in studies pertaining to technologies primarily aimed at supporting blind users.
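
    As one hedged illustration of the kind of generic haptic rendering rule such guidelines imply (the tolerance value and all names below are assumptions, not the published figures): trigger vibration whenever the touch point falls within a small tolerance of a rendered map line, so line work becomes traceable by finger.

```python
import math

def point_segment_distance(px, py, seg):
    """Distance from point (px, py) to the segment seg = ((x1, y1), (x2, y2))."""
    (x1, y1), (x2, y2) = seg
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:
        return math.hypot(px - x1, py - y1)
    t = max(0.0, min(1.0, ((px - x1) * dx + (py - y1) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (x1 + t * dx), py - (y1 + t * dy))

def should_vibrate(x, y, segments, tolerance=20.0):
    """True if the touch point lies within `tolerance` pixels of any map line."""
    return any(point_segment_distance(x, y, s) <= tolerance for s in segments)

route = [((100, 100), (100, 500)), ((100, 500), (400, 500))]
print(should_vibrate(105, 300, route))  # True: finger is on the vertical leg
```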

    Combining Locations from Working Memory and Long-Term Memory into a Common Spatial Image

    This research uses a novel integration paradigm to investigate whether target locations read in from long-term memory (LTM) differ from perceptually encoded inputs in spatial working memory (SWM) with respect to systematic spatial error and/or noise, and whether SWM can simultaneously encompass both of these sources. Our results provide evidence for a composite representation of space in SWM derived from both perception and LTM, albeit with a loss in spatial precision of locations retrieved from LTM. More generally, the data support the concept of a spatial image in working memory and extend its potential sources to representations retrieved from LTM.

    Spatial working memory for locations specified by vision and audition: Testing the amodality hypothesis

    Spatial working memory can maintain representations from vision, hearing, and touch, representations referred to here as spatial images. The present experiment addressed whether spatial images from vision and hearing that are simultaneously present within working memory retain modality-specific tags or are amodal. Observers were presented with short sequences of targets varying in angular direction, with the targets in a given sequence being all auditory, all visual, or a sequential mixture of the two. On two-thirds of the trials, one of the locations was repeated, and observers had to respond as quickly as possible when detecting this repetition. Ancillary detection and localization tasks confirmed that the visual and auditory targets were perceptually comparable. Response latencies in the working memory task showed small but reliable costs in performance on trials involving a sequential mixture of auditory and visual targets, as compared with trials of pure vision or pure audition. These deficits were statistically reliable only for trials on which the modality of the matching location switched from the penultimate to the final target in the sequence, indicating a switching cost. The switching cost for the pair in immediate succession means that the spatial images representing the target locations retain features of the visual or auditory representations from which they were derived. However, there was no reliable evidence of a performance cost for mixed modalities in the matching pair when the second of the two did not immediately follow the first, suggesting that more enduring spatial images in working memory may be amodal.